    Human Activity Recognition by Sequences of Skeleton Features

    In recent years, much effort has been devoted to the development of applications capable of detecting different types of human activity. In this field, fall detection is particularly relevant, especially for the elderly. On the one hand, some applications use wearable sensors integrated into cell phones, necklaces or smart bracelets to detect sudden movements of the person wearing the device. The main drawback of these systems is that the devices must be placed on the person’s body, which can be uncomfortable; moreover, such systems cannot be deployed in open spaces or with unfamiliar people. In contrast, other approaches perform activity recognition from video camera images, which offers a clear advantage since the user is not required to wear any sensor; as a result, these applications can be used in open spaces and with unknown people. This paper presents a vision-based algorithm for activity recognition. The main contribution of this work is the use of human skeleton pose estimation as a feature extraction method for activity detection in video camera images, which allows the activities of multiple people in the same scene to be detected. The algorithm is also capable of classifying multi-frame activities, i.e., those that require more than one frame to be detected. The method is evaluated with the public UP-FALL dataset and compared to similar algorithms using the same dataset. This research was supported in part by the Chilean Research and Development Agency (ANID) under Project FONDECYT 1191188, by the National University of Distance Education under Project 2021V/-TAJOV/00, and by the Ministry of Science and Innovation of Spain under Project PID2019-108377RB-C32.
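
    A minimal sketch of the skeleton-feature idea described above, assuming the MediaPipe pose estimator (the abstract does not name a specific pose library) and a hypothetical fixed-length sequence layout for a downstream classifier:

```python
# Hypothetical sketch: extract 2D skeleton keypoints per frame and stack
# them into a fixed-length sequence for an activity classifier.
# MediaPipe is an assumption; the paper does not specify this library.
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def skeleton_features(video_path: str, seq_len: int = 30) -> np.ndarray:
    """Return a (seq_len, 33 * 2) array of normalized (x, y) keypoints."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while len(frames) < seq_len:
            ok, bgr = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks is None:
                continue  # skip frames where no person is detected
            pts = [(lm.x, lm.y) for lm in result.pose_landmarks.landmark]
            frames.append(np.asarray(pts).ravel())
    cap.release()
    # Pad with the last detected pose if the clip is shorter than seq_len.
    while frames and len(frames) < seq_len:
        frames.append(frames[-1])
    return np.stack(frames) if frames else np.zeros((seq_len, 66))
```

    Each fixed-length window of keypoints then becomes one sample, which is what makes classifying multi-frame activities possible.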

    BERT for Activity Recognition Using Sequences of Skeleton Features and Data Augmentation with GAN

    Recently, the scientific community has placed great emphasis on the recognition of human activity, especially in the area of health and care for the elderly. There are already practical applications that recognize activities and unusual conditions using body sensors such as wrist-worn devices or neck pendants. These relatively simple devices may be prone to errors, might be uncomfortable to wear, might be forgotten or not worn, and are unable to detect more subtle conditions such as incorrect postures. Therefore, other proposed methods are based on the use of images and videos to carry out human activity recognition, even in open spaces and with multiple people. However, the resulting increase in the size and complexity of image data requires the most recent advanced machine learning and deep learning techniques. This paper presents an attention-based deep learning approach to the recognition of activities from multiple frames. Feature extraction is performed by estimating the pose of the human skeleton, and classification is performed using a neural network based on Bidirectional Encoder Representations from Transformers (BERT). The algorithm was trained with the UP-Fall public dataset, generating more balanced artificial data with a Generative Adversarial Network (GAN), and evaluated with real data, outperforming other activity recognition methods that use the same dataset. This research was supported in part by the Chilean Research and Development Agency (ANID) under Project FONDECYT 1191188, by the National University of Distance Education under Projects 2021V/-TAJOV/00 and OPTIVAC 096-034091 2021V/PUNED/008, and by the Ministry of Science and Innovation of Spain under Project PID2019-108377RB-C32.
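
    A hypothetical sketch of a BERT-style classifier over skeleton-feature sequences, assuming PyTorch; the layer sizes, class count, and [CLS]-token design are illustrative rather than taken from the paper, and positional encoding is omitted for brevity:

```python
# Illustrative transformer-encoder classifier for skeleton sequences.
# Hyperparameters are assumptions, not values from the paper.
import torch
import torch.nn as nn

class SkeletonBERT(nn.Module):
    def __init__(self, feat_dim=66, d_model=128, n_heads=4,
                 n_layers=2, n_classes=12):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # [CLS] token
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, seq, feat_dim)
        h = self.embed(x)
        cls = self.cls.expand(h.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, h], dim=1))
        return self.head(h[:, 0])              # classify from the [CLS] state

logits = SkeletonBERT()(torch.randn(8, 30, 66))  # e.g. 8 clips of 30 frames
```

    In the paper's pipeline, GAN-generated sequences are added to balance under-represented classes; in a sketch like this, such synthetic (sequence, label) pairs would simply be appended to the training set.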

    Fall detection and activity recognition using human skeleton features

    Human activity recognition has attracted the attention of researchers around the world. It is an interesting problem that can be addressed in different ways, and many approaches have been presented in recent years. These applications provide solutions to recognize different kinds of activities, such as whether the person is walking, running, jumping, jogging, or falling, among others. Among all these activities, fall detection is of special importance because it is a common dangerous event for people of all ages, with a particularly negative impact on the elderly. Usually, these applications use sensors to detect sudden changes in the movement of the person. Such sensors can be embedded in smartphones, necklaces, or smart wristbands to make them “wearable” devices. The main inconvenience is that these devices have to be placed on the subjects’ bodies. This might be uncomfortable and is not always feasible, because this type of sensor must be monitored constantly and cannot be used in open spaces with unknown people. Fall detection from video camera images therefore presents some advantages over wearable sensor-based approaches. This paper presents a vision-based approach to fall detection and activity recognition. The main contribution of the proposed method is to detect falls using only images from a standard video camera, without the need for environmental sensors; it carries out the detection using human skeleton estimation for feature extraction. The use of human skeleton detection opens the possibility of detecting not only falls but also different kinds of activities for several subjects in the same scene, so this approach can be used in real environments where a large number of people may be present at the same time. The method is evaluated with the UP-FALL public dataset and surpasses the performance of other fall detection and activity recognition systems that use the same dataset.
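
    As a purely illustrative cue (the paper trains a classifier rather than using a hand-written rule), a fall shows up in skeleton features as a rapid vertical drop of the hip keypoints; a minimal sketch with assumed frame rate and threshold:

```python
# Illustrative heuristic only; the paper's method is a learned classifier.
# Flags frames where the hip keypoint drops quickly between frames.
import numpy as np

def fall_cue(hip_y: np.ndarray, fps: float = 18.0,
             speed_thresh: float = 0.5) -> np.ndarray:
    """hip_y: normalized vertical hip positions per frame (0=top, 1=bottom).
    Returns a boolean array marking frames with a fall-like drop."""
    dy = np.diff(hip_y) * fps          # vertical speed, image units per second
    return np.concatenate([[False], dy > speed_thresh])

# Example: a person standing still, then dropping rapidly.
print(fall_cue(np.array([0.40, 0.41, 0.42, 0.55, 0.75, 0.76])))
```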

    Event-Based Control Strategy for Mobile Robots in Wireless Environments

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to exchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can easily be applied to other kinds of robots. The solution has been tested on classical navigation algorithms, such as wall following and obstacle avoidance, in scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution uses the communication resources more efficiently than the classical discrete-time strategy while achieving the same accuracy.
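
    A minimal send-on-delta sketch of the event-based idea, assuming hypothetical read_sensor, controller, and send_command callables (none of these names come from the paper): the controller transmits over the RF link only when the state has changed significantly, instead of at every sampling period:

```python
# Sketch of send-on-delta event-based control; all names are illustrative.
def event_based_loop(read_sensor, send_command, controller,
                     delta=0.05, steps=1000):
    last_sent_state = None
    transmissions = 0
    for _ in range(steps):
        state = read_sensor()
        # Event condition: communicate only when the change is significant.
        if last_sent_state is None or abs(state - last_sent_state) > delta:
            send_command(controller(state))
            last_sent_state = state
            transmissions += 1
    return transmissions  # fewer transmissions = less RF bandwidth used
```

    The returned transmission count is the quantity the paper's comparison hinges on: a periodic discrete-time controller would transmit on every step.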

    Position Control of a Mobile Robot through Deep Reinforcement Learning

    This article proposes the use of reinforcement learning (RL) algorithms to control the position of a simulated Khepera IV mobile robot in a virtual environment. The simulated environment uses the OpenAI Gym library in conjunction with CoppeliaSim, a 3D simulation platform, to perform the experiments and control the position of the robot. The RL agents used are deep deterministic policy gradient (DDPG) and deep Q-network (DQN), and their results are compared with two control algorithms called Villela and IPC. The results obtained from experiments in environments with and without obstacles show that DDPG and DQN learn to infer the best actions in the environment, effectively performing position control toward different target points and obtaining the best results across several metrics and indices.
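
    A generic sketch of the Gym-style interaction loop with a distance-based reward for position control; the reward shaping and the agent interface (act/observe) are assumptions for illustration, not taken from the article:

```python
# Illustrative agent-environment loop in the classic OpenAI Gym style.
import numpy as np

def position_reward(robot_xy, target_xy):
    # Dense reward: negative Euclidean distance to the target point.
    return -float(np.linalg.norm(np.asarray(robot_xy) - np.asarray(target_xy)))

def run_episode(env, agent, max_steps=500):
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(obs)            # e.g. continuous wheel speeds (DDPG)
        obs, reward, done, _info = env.step(action)
        agent.observe(obs, reward, done)   # store transition / update policy
        total_reward += reward
        if done:                           # target reached or timeout
            break
    return total_reward
```

    DDPG suits the continuous wheel-speed action space directly, while DQN requires discretizing the actions; both fit this same loop.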

    Comparative Study of Traditional, Simulated and Real Online Remote Laboratory: Student's Perceptions in Technical Training of Electronics

    Developments in technology and communication networks have made it possible to establish virtual and remote labs, providing new opportunities for students on campus and at a distance and overcoming some of the limitations of hands-on labs. The impact of such innovations on students' performance can be analyzed statistically by looking at specific skills or indicators. This paper addresses the lack of empirical evidence supporting electronics education innovations across three practical teaching methods, namely hands-on, simulation, and online remote real labs. The paper reports on the application of a methodology that takes into account the interaction between students and teachers at different levels of abstraction to evaluate a DC motor laboratory practice with 150 students at the Polydisciplinary Faculty of Beni Mellal in Morocco. In this work, the students' attitudes towards a specific practical method depend on its usefulness, usability, motivation and quality of understanding; these parameters were measured using a questionnaire that considers the relationship between the student, the teacher and the practical work environment. The data collected in each type of experiment environment were tabulated and analyzed by statistical methods. The results validate the students' satisfaction with the practical work environments and identify some aspects to be improved in future work.

    A Distributed Vision-Based Navigation System for Khepera IV Mobile Robots

    This work presents the development and implementation of a distributed navigation system based on object recognition algorithms. The main goal is to introduce advanced image processing algorithms and artificial intelligence techniques into the teaching of mobile robot control. The autonomous system consists of a wheeled mobile robot with an integrated color camera. The robot navigates through a laboratory scenario where the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that runs a computer vision algorithm to recognize the objects. The computer calculates the corresponding robot speeds according to the detected object, and these speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. Three different algorithms have been tested in simulation and in a practical mobile robot laboratory. The results show an average success rate of 84% for object recognition in experiments with the real mobile robot platform.
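
    An illustrative sketch of the server-side decision step described above: mapping a recognized traffic signal to wheel speeds for a differential-drive robot. The label names and speed values are assumptions, not values from the paper:

```python
# Hypothetical object-to-speeds dispatch for a differential-drive robot.
def speeds_for_object(label: str, v: float = 0.1) -> tuple[float, float]:
    """Return (left, right) wheel speeds for the detected object."""
    if label == "stop":
        return 0.0, 0.0
    if label == "turn_left":
        return 0.5 * v, v          # slow the left wheel to turn left
    if label == "turn_right":
        return v, 0.5 * v
    return v, v                    # default: follow the track straight ahead

# The robot applies the speeds it receives back over the network:
left, right = speeds_for_object("turn_left")
```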